Optimal Sparse Linear Auto-Encoders and Sparse PCA

Authors

  • Malik Magdon-Ismail
  • Christos Boutsidis
Abstract

Principal components analysis (PCA) is the optimal linear auto-encoder of data, and it is often used to construct features. Enforcing sparsity on the principal components can promote better generalization while improving the interpretability of the features. We study the problem of constructing optimal sparse linear auto-encoders. Two natural questions in such a setting are: (i) Given a level of sparsity, what is the best approximation to PCA that can be achieved? (ii) Are there low-order polynomial-time algorithms which can asymptotically achieve this optimal tradeoff between the sparsity and the approximation quality? In this work, we answer both questions by giving efficient low-order polynomial-time algorithms for constructing asymptotically optimal linear auto-encoders (in particular, sparse features with near-PCA reconstruction error) and demonstrate the performance of our algorithms on real data.
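The tradeoff the abstract describes can be illustrated numerically: the top-k right singular vectors give the optimal rank-k linear auto-encoder (PCA), and any sparse encoder can only match or exceed its reconstruction error. The sketch below uses synthetic data and a naive hard-thresholding heuristic purely for illustration; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 points in 10 dimensions with approximate rank-3 structure.
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 10))

k = 2  # number of components in the encoder

# Dense PCA encoder: top-k right singular vectors (the optimal linear auto-encoder).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                                  # (10, k) dense loadings
err_pca = np.linalg.norm(X - X @ V @ V.T, "fro") ** 2

# Naive sparse encoder: keep only the s largest-magnitude entries per loading,
# then re-orthonormalize the columns (illustration only).
s = 4
W = V.copy()
for j in range(k):
    drop = np.argsort(np.abs(W[:, j]))[:-s]   # indices of the smallest entries
    W[drop, j] = 0.0
Q, _ = np.linalg.qr(W)                        # orthonormal basis for the sparse loadings
err_sparse = np.linalg.norm(X - X @ Q @ Q.T, "fro") ** 2

print(f"PCA reconstruction error:    {err_pca:.4f}")
print(f"Sparse reconstruction error: {err_sparse:.4f}")
```

Since PCA minimizes reconstruction error over all rank-k linear encoders, `err_sparse >= err_pca` always holds; the question the paper studies is how small that gap can be made for a given sparsity level.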


Similar articles

Optimal Sparse Linear Encoders and Sparse PCA

Principal components analysis (PCA) is the optimal linear encoder of data. Sparse linear encoders (e.g., sparse PCA) produce more interpretable features that can promote better generalization. (i) Given a level of sparsity, what is the best approximation to PCA? (ii) Are there efficient algorithms which can achieve this optimal combinatorial tradeoff? We answer both questions by providing the f...


Discriminative Recurrent Sparse Auto-Encoders

We present the discriminative recurrent sparse auto-encoder model, comprising a recurrent encoder of rectified linear units, unrolled for a fixed number of iterations, and connected to two linear decoders that reconstruct the input and predict its supervised classification. Training via backpropagation-through-time initially minimizes an unsupervised sparse reconstruction error; the loss functi...


Saturating Auto-Encoders

We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE’s ability t...


Why Regularized Auto-Encoders learn Sparse Representation?

Although a number of auto-encoder models enforce sparsity explicitly in their learned representation while others don’t, there has been little formal analysis on what encourages sparsity in these models in general. Therefore, our objective here is to formally study this general problem for regularized auto-encoders. We show that both regularization and activation function play an important role...


Deep Neural Networks for Iris Recognition System Based on Video: Stacked Sparse Auto-Encoders (SSAE) and Bi-propagation Neural Network Models

The iris recognition technique is now regarded as among the most trustworthy biometric tactics. This is basically ascribed to its extraordinary consistency in identifying individuals. Moreover, this technique is highly efficient because of the iris' distinctive characteristics and due to its ability to protect the iris against environmental and aging effects. The problem statement of this work is that th...



Journal:
  • CoRR

Volume: abs/1502.06626  Issue:

Pages: -

Publication date: 2015